
Setting general connection parameters

This section describes how to configure general connection properties. For an explanation of how to configure advanced connection properties, see Setting advanced connection properties.

To add a Snowflake on AWS target endpoint to Qlik Replicate:

  1. In the Qlik Replicate Console, click Manage Endpoint Connections to open the Manage Endpoint Connections dialog box.
  2. In the Manage Endpoint Connections dialog box, click New Endpoint Connection.
  3. In the Name field, specify a name for your Snowflake on AWS endpoint.
  4. Optionally, in the Description field, enter a description for the Snowflake on AWS target endpoint.
  5. Select Target as the role.

  6. Select Snowflake on AWS as the Type.

  7. Configure the Snowflake on AWS Target settings as follows:

    • Snowflake Account/Host: Your host name for accessing Snowflake on AWS.
    • Authentication: Select one of the following:
      • Username and password: Enter the username and password of a user authorized to access the Snowflake database.

        Information note: This authentication method is not supported when Snowpipe Streaming is the Loading method.
      • OAuth: To use OAuth authentication, your Snowflake database must be configured to use OAuth. The process is described in the Snowflake documentation (Configure Snowflake OAuth for Custom Clients and External OAuth). A sketch of a custom OAuth security integration is provided after the procedure below.

        Information note: This authentication method is not supported when Snowpipe Streaming is the Loading method.

        • Authorize URL: The IdP server for requesting authorization codes. The authorization URL format depends on the IdP.

          For Snowflake:

          https://<yourSnowflakeAccount>/oauth/authorize

          For Okta:

          https://<yourOktaDomain>/oauth2/<authorizationServerId>/v1/authorize

        • Token URL: The IdP server used to exchange the authorization code for an access token. The access token URL format depends on the IdP.

          For Snowflake:

          https://<yourSnowflakeAccount>/oauth/token-request

          For Okta:

          https://<yourOktaDomain>/oauth2/<authorizationServerId>/v1/token

        • Client ID: The client ID of your application.

        • Client secret: The client secret of your application.

        • Scope: You might be required to specify at least one scope attribute, depending on your IdP configuration. Scope attributes must be separated by a space. Refer to your IdP's online help for information about the available scopes and their respective formats.

        • Use default proxy settings: Select to connect via a proxy server when clicking Generate. Note that the proxy settings must be defined in the server settings' Endpoints tab.

        • Refresh token: The refresh token value. Click Generate to generate a new refresh token. When you click Generate, your IdP will prompt you for your access credentials. Once you have provided the credentials, the Refresh token field will be populated with the token value.

        Warning note: The IdP must not be configured to rotate the refresh token.
        Information note:

        When using Replicate Console, the OAuth redirect URL is https://{hostname}/attunityreplicate/rest/oauth_complete.

        When using Enterprise Manager, the OAuth redirect URL is https://{hostname}/attunityenterprisemanager/rest/oauth_complete.

        Replace the {hostname} part of the URL with the host name of the machine you are connecting to (Enterprise Manager, Replicate on Windows, Replicate on Linux, or Replicate on Windows using port 3552).

        If you connect to Replicate with a hostname that differs from the hostname in the redirect URL (configured in your IdP), you need to add that name to the end of the <REPLICATE-INSTALL-DIR>\bin\repctlcfg file in the following format (using localhost as an example):

        "address": localhost

        Then restart the Qlik Replicate Server service.

      • Key Pair: Select and then provide the following information (an example of assigning the public key to the Snowflake user is provided after the procedure below):

        • Username: The username of a user authorized to access the Snowflake database.
        • Private key file: The full path to your Private key file (in PEM format).

          Example: C:\Key\snow.pem

        • Private key passphrase: If the private key file is encrypted, specify the passphrase.
    • Database name: The name of your Snowflake database.
  8. Configure the Data Loading settings as follows:

    • Loading method: Select Bulk Loading (the default) or Snowpipe Streaming.

      Information note: If you select Snowpipe Streaming, make sure that you are aware of the limitations of this method.

      The main reasons to choose Snowpipe Streaming over Bulk Loading are: 

      • Less costly: As Snowpipe Streaming does not use the Snowflake warehouse, operating costs should be significantly lower, although this will depend on your specific use case.

      • Reduced latency: As the data is streamed directly to the target tables (as opposed to via staging), replication from source to target should be faster.

    • When Bulk Loading is selected, the following properties are available:

      • Warehouse: The name of your Snowflake warehouse.
      • Staging type: Select either Snowflake (the default) or AWS S3. When Snowflake is selected, Snowflake's internal storage will be used.
        • When AWS S3 is selected, you also need to provide the following information:
          • Bucket name: The name of the Amazon S3 bucket to where the files will be copied.
          • Bucket region: The region where your bucket is located. It is recommended to leave the default (Auto-Detect) as it usually eliminates the need to select a specific region. However, due to security considerations, for some regions (for example, AWS GovCloud) you might need to explicitly select the region. If the region you require does not appear in the regions list, select Other and specify the code in the Region code field.

            For a list of region codes, see AWS Regions.

          • Access type: Choose one of the following:
            • Key pair - Choose this method to authenticate with your Access Key and Secret Key. Then provide the following additional information:
              • Access key: Type the access key information for Amazon S3.
              • Secret key: Type the secret key information for Amazon S3.
            • IAM Roles for EC2 - Choose this method if the machine on which Qlik Replicate is installed is configured to authenticate itself using an IAM role. Then provide the following additional information:
              • External stage name: The name of your external stage. To use the IAM Roles for EC2 access type, you must create an external stage that references the S3 bucket.

              To use the IAM Roles for EC2 access method, you also need to fulfill the prerequisites described in Prerequisite for using the IAM Roles for EC2 Access Type.

          • Folder: The bucket folder to where the files will be copied.
      • Max file size (MB): Relevant for Full Load and CDC. The maximum size a file can reach before it is loaded to the target. If you encounter performance issues, try adjusting this parameter.

      • Number of files to load in a batch: Relevant for Full Load only. The number of files to load in a single batch. If you encounter performance issues, try adjusting this parameter.

      • Batch load timeout (seconds): If you encounter frequent timeouts when loading the files, try increasing this value.

Information note: The information for these properties is available from the account page for Amazon Web Services (AWS) with the Snowflake on AWS cluster. If you do not have these values, refer to your AWS account or the Snowflake on AWS System Administrator for your enterprise.
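
If Snowflake itself serves as the IdP for the OAuth authentication method described above, the OAuth client is typically registered on the Snowflake side as a custom OAuth security integration. The following is a minimal sketch only; the integration name (replicate_oauth), the redirect host, and the refresh token validity are placeholder assumptions and should be adapted to your environment:

-- Hypothetical example: register Replicate as a confidential custom OAuth client
create security integration replicate_oauth
  type = oauth
  enabled = true
  oauth_client = custom
  oauth_client_type = 'CONFIDENTIAL'
  oauth_redirect_uri = 'https://<yourReplicateHost>/attunityreplicate/rest/oauth_complete'
  oauth_issue_refresh_tokens = true
  oauth_refresh_token_validity = 86400;

-- Retrieve the client ID and client secret to enter in the endpoint settings
select system$show_oauth_client_secrets('REPLICATE_OAUTH');

The redirect URI must match the Replicate or Enterprise Manager OAuth redirect URL described in the procedure above.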
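
For the Key Pair authentication option, Snowflake expects the public half of an RSA key pair to be assigned to the connecting user, while the private key file (in PEM format) is referenced in the endpoint settings. A minimal sketch, assuming a user named REPLICATE_USER and a placeholder key value:

-- Hypothetical example: assign the RSA public key (PEM body without the header and footer lines) to the user
alter user REPLICATE_USER set rsa_public_key='MIIBIjANBgkqh...';

-- Confirm that the key was registered
describe user REPLICATE_USER;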

Prerequisite for using the IAM Roles for EC2 Access Type

To use the IAM Roles for EC2 access type, you must run the following commands on the Snowflake on AWS database before running the task:

Command 1:

create or replace file format MY_FILE_FORMAT TYPE='CSV' field_delimiter=',' compression='GZIP' record_delimiter='\n' null_if=('attrep_null') skip_header=0 FIELD_OPTIONALLY_ENCLOSED_BY='\"';

Command 2:

create or replace stage "PUBLIC".MY_S3_STAGE file_format=MY_FILE_FORMAT url='s3://MY_STORAGE_URL' credentials=(aws_role='MY_IAM_ROLE');

Where:

MY_FILE_FORMAT - Can be any value.

MY_S3_STAGE - The name specified in the External stage name field above.

MY_STORAGE_URL - The URL of your Amazon S3 bucket.

MY_IAM_ROLE - Your IAM role name.
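
After running the two commands, you can optionally confirm that the stage can reach the S3 bucket through the IAM role before starting the task. A minimal check, using the stage name from Command 2:

-- List the contents of the external stage to verify access to the S3 bucket
list @"PUBLIC".MY_S3_STAGE;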

Information note: To determine whether you are connected to the database you want to use and whether the connection information you entered is correct, click Test Connection.

If the connection is successful, a green message is displayed. If the connection fails, an error message is displayed at the bottom of the dialog box.

To view the log entry if the connection fails, click View Log. The server log is displayed with the information for the connection failure. Note that this button is not available unless the test connection fails.
